Learning from Survey Propagation: a Neural Network for MAX-E-3-SAT
Many natural optimization problems are NP-hard, which implies that they are
probably hard to solve exactly in the worst case. In practice, however, it
often suffices to obtain reasonably good solutions for all (or even most)
instances. This paper presents a new algorithm for computing approximate
solutions to the Maximum Exact 3-Satisfiability (MAX-E-3-SAT) problem using
deep learning methodology. This methodology allows us to create a learning
algorithm able to fix Boolean variables using local information obtained from
the Survey Propagation algorithm. Through a careful analysis on random CNF
instances of MAX-E-3-SAT with several Boolean variables, we show that this new
algorithm, which avoids any decimation strategy, can build assignments better
than a random one, even when the Survey Propagation messages do not converge.
Although this algorithm is not competitive with state-of-the-art Maximum
Satisfiability (MAX-SAT) solvers, it can solve substantially larger and more
complicated problems than it ever saw during training.
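The "better than random" comparison in the abstract has a simple quantitative anchor: a uniformly random assignment satisfies each exact-3-clause with probability 7/8. A minimal sketch of that baseline, assuming a random E-3-SAT generator and evaluator (the function names here are hypothetical, not from the paper):

```python
import random

def random_e3sat(n_vars, n_clauses, rng):
    """Random exact-3-SAT: each clause has 3 distinct variables, random signs."""
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(n_vars), 3)
        clauses.append([(v, rng.random() < 0.5) for v in vs])  # (var, negated?)
    return clauses

def frac_satisfied(clauses, assignment):
    """Fraction of clauses with at least one true literal."""
    sat = sum(any(assignment[v] != neg for v, neg in c) for c in clauses)
    return sat / len(clauses)

rng = random.Random(0)
formula = random_e3sat(200, 800, rng)
guess = [rng.random() < 0.5 for _ in range(200)]
# A random assignment satisfies ~7/8 = 0.875 of the clauses in expectation;
# any learned assignment must beat this baseline to be interesting.
print(frac_satisfied(formula, guess))
```

Any proposed solver is then scored by how far above the 7/8 baseline its assignments land.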
The backtracking survey propagation algorithm for solving random K-SAT problems
Discrete combinatorial optimization plays a central role in many scientific
disciplines; however, for hard problems we lack linear-time algorithms that
would allow us to solve very large instances. Moreover, it is still unclear
what key features make a discrete combinatorial optimization problem hard to
solve. Here we study random K-satisfiability problems, which are known to be
very hard close to the SAT-UNSAT threshold, where problems stop having
solutions. We show that the backtracking survey propagation algorithm, in a
time practically linear in the problem size, is able to find solutions very
close to the threshold, in a region unreachable by any other algorithm. All
solutions found have no frozen variables, thus supporting the conjecture that
only unfrozen solutions can be found in linear time, and that a problem becomes
impossible to solve in linear time when all solutions contain frozen variables.
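The distinctive feature of backtracking survey propagation relative to plain decimation is that it can also release previously fixed variables. That control structure can be sketched independently of the SP equations themselves; in the sketch below a simple majority-vote bias stands in for the SP marginals (a stand-in, not the paper's algorithm), and the loop only illustrates the fix/release alternation:

```python
import random

def majority_bias(clauses, n_vars):
    """Stand-in for SP marginals: net positive-minus-negative occurrences."""
    bias = [0] * n_vars
    for c in clauses:
        for v, neg in c:
            bias[v] += -1 if neg else 1
    return bias

def backtracking_decimation(clauses, n_vars, backtrack_every=10):
    # Real BSP recomputes the SP messages after every fix/release move;
    # this sketch keeps the bias frozen and only shows the loop structure.
    bias = majority_bias(clauses, n_vars)
    free = set(range(n_vars))
    fixed = {}
    step = 0
    while free:
        step += 1
        if step % backtrack_every == 0 and fixed:
            v = min(fixed, key=lambda u: abs(bias[u]))  # release (backtrack)
            del fixed[v]
            free.add(v)
            continue
        v = max(free, key=lambda u: abs(bias[u]))       # fix most-biased var
        free.remove(v)
        fixed[v] = bias[v] >= 0
    return [fixed[v] for v in range(n_vars)]

rng = random.Random(0)
clauses = [[(v, rng.random() < 0.5) for v in rng.sample(range(50), 3)]
           for _ in range(200)]
assignment = backtracking_decimation(clauses, 50)
```

Because at most one variable is released per `backtrack_every` fixing steps, the loop makes net progress and always terminates.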
Waves and vortices in the inverse cascade regime of stratified turbulence with or without rotation
We study the partition of energy between waves and vortices in stratified
turbulence, with or without rotation, for a variety of parameters, focusing on
the behavior of the waves and vortices in the inverse cascade of energy towards
the large scales. To this end, we use direct numerical simulations in a cubic
box at a Reynolds number Re=1000, with the ratio between the
Brunt-V\"ais\"al\"a frequency N and the inertial frequency f varying from 1/4
to 20, together with a purely stratified run. The Froude number, measuring the
strength of the stratification, varies within the range 0.02 < Fr < 0.32. We
find that the inverse cascade is dominated by the slow quasi-geostrophic modes.
Their energy spectra and fluxes exhibit characteristics of an inverse cascade,
even though their energy is not conserved. Surprisingly, the slow vortices
still dominate when the ratio N/f increases, also in the stratified case,
although less and less so. However, when N/f increases, the inverse cascade of
the slow modes becomes weaker and weaker, and it vanishes in the purely
stratified case. We discuss how the disappearance of the inverse cascade of
energy with increasing N/f can be interpreted in terms of the waves and
vortices, and identify three major effects that can explain this transition
based on arguments involving the inviscid invariants.
On the emergence of helicity in rotating stratified turbulence
We perform numerical simulations of decaying rotating stratified turbulence
and show, in the Boussinesq framework, that helicity (velocity-vorticity
correlation), as observed in super-cell storms and hurricanes, is spontaneously
created due to an interplay between buoyancy and rotation common to large-scale
atmospheric and oceanic flows. Helicity emerges from the joint action of eddies
and of inertia-gravity waves (with inertia and gravity having respective
associated frequencies f and N), and it occurs when the waves are sufficiently
strong. When the waves are sufficiently strong, the amount of helicity produced
is correctly predicted by a quasi-linear balance equation. Outside this regime,
and up to the highest Reynolds number obtained in this study, helicity
production is found to be persistent over the wide range of parameters
explored.
Servitization and territorial self-reinforcement mechanisms: a new approach to regional competitiveness
The present paper discusses a theoretical model to explain the link between servitization and territorial competitiveness, based on an empirical study of this relationship in Italy. A key assumption of the model is that, once the link between manufacturing and KIBS is established within a TES, a positive feedback arises between increasing productivity (competitiveness) and the strength of the link between firms and KIBS, which grows stronger and stronger, triggering a self-reinforcing dynamic. This means that every evolutionary step of the system influences the next, and thus the evolution of the entire system, generating path dependence. Such a system has a high number of asymptotic states, and the initial state (time zero), unforeseen shocks, or other kinds of fluctuations can lead the system into any of the different domains of the asymptotic states (1). In other words, both the theoretical assumptions and the empirical model outlined in this paper demonstrate that when a functional relationship between manufacturing and services is established (servitization), economic performance is positive or very positive.
Multivariate meta-analysis of QTL mapping studies
A large number of quantitative trait loci (QTLs) for milk production and quality traits in dairy
cattle have been reported in the literature. The large amount of information available could be exploited
by meta-analyses to draw more general conclusions from results obtained under different experimental
conditions (animals, statistical methodologies). QTL meta-analyses have been carried out to estimate
the distribution of QTL effects in livestock and to find consensus on QTL position. In this study, multivariate
dimension reduction techniques are used to analyse a database of dairy cattle QTL published
results, in order to extract latent variables able to characterise the research. A total of 92 papers by 72
authors were found in 25 scientific journals for the period January 1995 - February 2008. More than
thirty parameters were extracted from the articles. To overcome the problem of differing map locations,
the flanking markers were mapped on release 4.1 of the Bos taurus genome sequence (www.ensembl.org).
Their position was retrieved from public databases and, when absent, was calculated in silico
by blasting (http://blast.wustl.edu/) the markers’ nucleotide sequence against the genomic sequence.
Records were discarded if flanking markers or P-values were not available. After these edits, the final
archive consisted of 1,162 records. Seven selected variables were analysed with both Factor Analysis
(FA), combined with the varimax rotation technique, and Principal Component Analysis (PCA). FA
was able to explain 68% of the original variability with 3 latent factors: the first factor extracted was
highly associated (factor loading of 0.98) to marker location along the chromosome and could be considered
as a marker map index; the second factor showed factor loadings of 0.74 and 0.84 related to the
variable number of animals involved and year of the experiment, respectively, and it can be regarded
as an indicator of the dimension of the study; the third factor was correlated with the significance level
of the statistical test (0.78), the number of families (0.63), and, negatively, with the marker density
(-0.43). It can be regarded as an index of the power of the experiment. The same patterns can be
observed in the eigenvectors of the PCA: four PCs were able to explain about 80% of the original
variance. The first two PCs essentially reproduced the structure found with the first two factors in FA,
whereas PC3 and PC4 summarized the structure of the third factor. The score that each QTL obtains on
each factor or PC could be useful for classifying the original QTL records and making them more
comparable once the redundancy of information has been removed.
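As an illustration of the dimension-reduction step, a numpy-only PCA on a synthetic stand-in for the 1,162 x 7 archive (the data below are simulated, not the published QTL records) shows how a few components absorb most of the correlated variance and how per-record scores are obtained:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for the archive: 1,162 records x 7 variables driven
# by 3 latent factors, so a few components dominate (simulated data only).
n = 1162
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, 7)) + 0.3 * rng.normal(size=(n, 7))

# Standardize, then PCA via eigen-decomposition of the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()
scores = Z @ eigvecs[:, order]   # per-record score on each PC
print(np.round(np.cumsum(explained)[:4], 3))
```

With three latent factors driving seven observed variables, the leading PCs capture most of the variance, mirroring the structure described in the abstract; the `scores` matrix plays the role of the per-QTL factor/PC scores.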